In this project, I explore gradient domain processing, a technique that manipulates the gradients of an image rather than its raw pixel values. This method is widely used for tasks like seamless blending, tone mapping, and non-photorealistic rendering. The core of this project is Poisson blending, which lets me integrate an object from a source image into a target image by preserving its gradient structure while adapting to the target's surroundings. Unlike a simple copy-and-paste, which creates harsh seams, this approach produces smooth transitions and realistic integration. By focusing on gradient differences, I found that I can blend objects naturally even when their intensity or lighting conditions differ. In addition to Poisson blending, I explore failure cases and some post-processing solutions.
Before diving into Poisson image blending, I begin with a 🧩 Toy Problem to build intuition. Instead of blending two images, I reconstruct an image using only its gradient information. The goal is to recover the image while preserving its structure by minimizing the difference between gradients in the reconstructed image and the original source.
1️⃣ Preserve x-gradients: Ensure the horizontal gradients of the reconstructed image \( v \) match those of the source image \( s \).
\[ \min \sum_{x,y} ((v(x+1,y) - v(x,y)) - (s(x+1,y) - s(x,y)))^2 \]
2️⃣ Preserve y-gradients: Ensure the vertical gradients of \( v \) match those of \( s \).
\[ \min \sum_{x,y} ((v(x,y+1) - v(x,y)) - (s(x,y+1) - s(x,y)))^2 \]
3️⃣ Anchor the reconstruction: Fix the intensity at the upper-left corner to avoid an arbitrary shift in brightness.
\[ \min (v(0,0) - s(0,0))^2 \]
By setting up and solving this least-squares problem, we get the following result:
Input: Original Image
Output: Reconstructed Image
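To make the setup concrete, here is a minimal sketch of one way this least-squares system could be assembled and solved with SciPy's sparse solver. The helper name `toy_reconstruct` and the explicit per-pixel loops are illustrative choices, not necessarily how the final code is organized.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def toy_reconstruct(s):
    """Rebuild a grayscale image from its x/y gradients plus one anchored pixel."""
    H, W = s.shape
    im2var = np.arange(H * W).reshape(H, W)   # map (y, x) -> unknown index
    rows, cols, vals, b = [], [], [], []
    eq = 0

    # x-gradient constraints: v(x+1,y) - v(x,y) = s(x+1,y) - s(x,y)
    for y in range(H):
        for x in range(W - 1):
            rows.extend([eq, eq])
            cols.extend([im2var[y, x + 1], im2var[y, x]])
            vals.extend([1.0, -1.0])
            b.append(s[y, x + 1] - s[y, x])
            eq += 1

    # y-gradient constraints: v(x,y+1) - v(x,y) = s(x,y+1) - s(x,y)
    for y in range(H - 1):
        for x in range(W):
            rows.extend([eq, eq])
            cols.extend([im2var[y + 1, x], im2var[y, x]])
            vals.extend([1.0, -1.0])
            b.append(s[y + 1, x] - s[y, x])
            eq += 1

    # anchor the top-left pixel so the solution is not shifted by a constant
    rows.append(eq); cols.append(im2var[0, 0]); vals.append(1.0)
    b.append(s[0, 0]); eq += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, H * W))
    v = lsqr(A, np.asarray(b))[0]              # sparse least-squares solve
    return v.reshape(H, W)
```

Each gradient constraint contributes one row with a +1/−1 pair of coefficients, and the single anchor row pins \( v(0,0) \) to \( s(0,0) \), removing the constant-offset ambiguity.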
Now that I’ve explored gradient reconstruction, it’s time to apply Poisson blending to seamlessly insert objects into a new scene. Here, I use it to blend curled-up, loafy orange cats into different bakery scenes 🐱🍞. I experiment with both real and generated images, and try blending into challenging backgrounds.
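Concretely, with \( S \) the masked region, \( N_i \) the 4-neighborhood of pixel \( i \), \( s \) the source, and \( t \) the target, Poisson blending solves the standard discrete objective
\[ v = \arg\min_{v} \sum_{i \in S,\; j \in N_i \cap S} \big((v_i - v_j) - (s_i - s_j)\big)^2 + \sum_{i \in S,\; j \in N_i \setminus S} \big((v_i - t_j) - (s_i - s_j)\big)^2 \]
Below is a minimal per-channel sketch of this setup; the function name `poisson_blend` and its argument layout are illustrative, and it assumes the source has already been cropped and aligned to the target frame.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

def poisson_blend(source, target, mask):
    """Solve for blended pixels inside `mask` (one color channel at a time).

    source, target : 2-D float arrays aligned to the same frame.
    mask           : boolean array, True where the source object goes.
    Pixels outside the mask keep their target values.
    """
    H, W = target.shape
    var_idx = -np.ones((H, W), dtype=int)
    var_idx[mask] = np.arange(mask.sum())      # unknowns live inside the mask
    n_vars = int(mask.sum())

    rows, cols, vals, b = [], [], [], []
    eq = 0
    for y in range(H):
        for x in range(W):
            if not mask[y, x]:
                continue
            for dy, dx in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                ny, nx = y + dy, x + dx
                if not (0 <= ny < H and 0 <= nx < W):
                    continue
                grad = source[y, x] - source[ny, nx]   # source gradient to keep
                rows.append(eq); cols.append(var_idx[y, x]); vals.append(1.0)
                if mask[ny, nx]:
                    # neighbor is also unknown: v_i - v_j = s_i - s_j
                    rows.append(eq); cols.append(var_idx[ny, nx]); vals.append(-1.0)
                    b.append(grad)
                else:
                    # neighbor is fixed to the target: v_i = (s_i - s_j) + t_j
                    b.append(grad + target[ny, nx])
                eq += 1

    A = sp.csr_matrix((vals, (rows, cols)), shape=(eq, n_vars))
    v = lsqr(A, np.asarray(b))[0]

    out = target.copy()
    out[mask] = v
    return out
```

This would be run once per color channel, with the three results stacked back into an RGB image.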
Source
Target
Naive Blend
Poisson Blend
Source
Target
This image was generated with DALL-E 3
Naive Blend
Poisson Blend
Source
Target
Naive Blend
Poisson Blend
Source
Target
Naive Blend
Poisson Blend
In this example, Poisson blending fails because it preserves gradients rather than absolute colors, causing significant color distortion, edge artifacts, and unnatural blending when the source and target backgrounds differ in lighting, color, and texture.
I find the main issues are:
1️⃣ Color shift or bleeding: the cat appears grayish green because the method transfers gradients instead of pixel values, causing unwanted color blending.
2️⃣ Unnatural boundaries or halo effect: strong texture and shadow variations in the target disrupt seamless blending.
3️⃣ Texture mismatch: the cat's fur and the baking tray's sharp reflections have different structures, making Poisson blending struggle to maintain realism.
4️⃣ Lighting inconsistency: the target has warm tones, while the source has cooler tones, leading to unnatural desaturation.
Source
Target
Naive Blend
Poisson Blend
In the following two examples, Poisson blending fails because the method matches only gradients and ignores absolute intensity. This leads to significant color drift: the cat appears too warm and oversaturated compared to the background, taking on the local background's color characteristics instead of keeping its original tone. To fix this, I wrote preserve_source_color, a post-processing function that adjusts the blended result's intensity to match the original source. It calculates the average color of the source and of the blended region within the mask, computes their difference, and applies that color shift to correct the mismatch. This ensures the cat retains its natural color (as seen in Favorite Results) while keeping Poisson blending's seamless edges, making the result more realistic.
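A sketch of how preserve_source_color could work, assuming float RGB images in [0, 1] and a boolean mask aligned to the blended frame (the exact implementation may differ):

```python
import numpy as np

def preserve_source_color(blended, source, mask):
    """Shift the blended region's colors back toward the original source.

    blended, source : H x W x 3 float arrays in [0, 1], aligned to one frame.
    mask            : boolean H x W array marking the pasted object.
    """
    corrected = blended.copy()
    for c in range(3):
        # mean color of the object before and after Poisson blending
        src_mean = source[..., c][mask].mean()
        blend_mean = blended[..., c][mask].mean()
        # apply the per-channel difference as a constant color shift
        corrected[..., c][mask] += src_mean - blend_mean
    return np.clip(corrected, 0.0, 1.0)
```

Because the correction is a constant per-channel shift inside the mask, it restores the object's average tone without disturbing the gradient structure that makes the Poisson seams invisible.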